Video representation learning has been successful in video-text pre-training for zero-shot transfer, where each sentence is trained to be close to the paired video clips in a common feature space. For long videos, given a paragraph of description where the sentences describe different segments of the video, by matching all sentence-clip pairs, the paragraph and the full video are aligned implicitly. However, such a unit-level similarity measure may ignore the global temporal context over a long time span, which inevitably limits the generalization ability. In this paper, we propose TempCLR, a contrastive learning framework that compares the full video and the paragraph explicitly. As the video/paragraph is formulated as a sequence of clips/sentences, under the constraint of their temporal order, we use dynamic time warping to compute the minimum cumulative cost over sentence-clip pairs as the sequence-level distance. To explore the temporal dynamics, we break the consistency of temporal order by shuffling video clips or sentences according to the temporal granularity. In this way, we obtain clip/sentence representations that perceive temporal information and thus facilitate sequence alignment. In addition to pre-training on videos and paragraphs, our approach also generalizes to matching between different video instances. We evaluate our approach on video retrieval, action step localization, and few-shot action recognition, and achieve consistent performance gains over all three tasks. Detailed ablation studies are provided to justify the design of the approach.
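The sequence-level distance described here, i.e., the minimum cumulative cost over sentence-clip pairs under the temporal-order constraint, is classic dynamic time warping. A minimal sketch (the cost matrix is assumed to hold pairwise sentence-clip distances; this is not the paper's implementation):

```python
import numpy as np

def dtw_distance(cost):
    """Minimum cumulative alignment cost over a clip-sentence cost
    matrix, respecting temporal order (standard dynamic time warping)."""
    n, m = cost.shape
    acc = np.full((n + 1, m + 1), np.inf)
    acc[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            acc[i, j] = cost[i - 1, j - 1] + min(
                acc[i - 1, j],      # advance clip only
                acc[i, j - 1],      # advance sentence only
                acc[i - 1, j - 1],  # advance both
            )
    return acc[n, m]
```

A perfectly ordered pairing (zeros on the diagonal of the cost matrix) yields distance 0, while shuffled clips inflate the cumulative cost, which is what makes this a usable sequence-level contrast signal.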
Few-shot object detection (FSOD), which aims to detect novel objects using very few training examples, has recently attracted great research interest in the community. Metric-learning based methods have been demonstrated to be effective for this task using a two-branch Siamese network, which computes the similarity between image regions and few-shot examples for detection. However, in previous works, the interaction between the two branches is restricted to the detection head, while the remaining hundreds of layers are left for separate feature extraction. Inspired by recent work on vision transformers and vision-language transformers, we propose a novel Fully Cross-Transformer based model (FCT) for FSOD by incorporating cross-transformers into both the feature backbone and the detection head. Asymmetric-batched cross-attention is proposed to aggregate the key information from the two branches with different batch sizes. Our model can improve the few-shot similarity learning between the two branches by introducing multi-level interactions. Comprehensive experiments on both the PASCAL VOC and MSCOCO FSOD benchmarks demonstrate the effectiveness of our model.
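The asymmetric-batched cross-attention can be sketched roughly as follows: queries from one branch attend over keys/values gathered from the other branch, whose batch is flattened so the two branches need not share a batch size. Function names and shapes are illustrative assumptions, not the paper's implementation:

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def asymmetric_cross_attention(query_feats, support_feats):
    """query_feats: (n_query_tokens, d); support_feats: (b, l, d).
    The support batch is flattened into one key/value set, so the two
    branches can carry different batch sizes."""
    d = query_feats.shape[-1]
    kv = support_feats.reshape(-1, d)                    # (b * l, d)
    attn = softmax(query_feats @ kv.T / np.sqrt(d), axis=-1)
    return attn @ kv                                     # same shape as queries
```

In the real model this interaction is inserted at multiple levels of the backbone and head, rather than only in the detection head as in prior two-branch designs.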
Few-shot object detection (FSOD) aims to detect never-before-seen objects using few examples. Recent advances have been achieved by learning how to match between query image regions and few-shot class examples, such that the learned model can generalize to few-shot novel classes. However, currently most meta-learning based methods perform pairwise matching between query image regions (usually proposals) and novel classes separately, and therefore fail to take into account the multiple relationships among them. In this paper, we propose a novel FSOD model using heterogeneous graph convolutional networks. Through efficient message passing among all the proposal and class nodes with three different types of edges, we can obtain context-aware proposal features and query-adaptive, multiclass-enhanced prototype representations for each class, which could help promote the pairwise matching and improve the final FSOD accuracy. Extensive experimental results show that our proposed model, denoted QA-FewDet, outperforms the current state-of-the-art approaches on the PASCAL VOC and MSCOCO FSOD benchmarks under different shots and evaluation metrics.
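One message-passing step over such a proposal-and-class graph can be sketched as a standard row-normalized graph convolution applied per edge type; the function and its arguments are assumptions for illustration, not the paper's architecture:

```python
import numpy as np

def hetero_gcn_layer(node_feats, adj, weight):
    """One propagation step for a single edge type (e.g. class-class,
    proposal-class, or proposal-proposal). Nodes mix proposal features
    and class prototypes; adj is that edge type's adjacency matrix."""
    a = adj + np.eye(adj.shape[0])          # add self loops
    a = a / a.sum(axis=1, keepdims=True)    # row-normalize
    return np.maximum(a @ node_feats @ weight, 0.0)  # propagate + ReLU
```

Running one such layer per edge type and summing the outputs is a common way to handle heterogeneous edges; the pairwise matching then operates on the updated, context-aware node features.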
Few-shot object detection (FSOD) aims to detect objects using only a few examples. How to adapt state-of-the-art object detectors to the few-shot domain remains challenging. Object proposal is a key ingredient in modern object detectors. However, the quality of proposals generated for few-shot classes using existing methods is far worse than that for many-shot classes, e.g., missing boxes for few-shot classes due to misclassification or inaccurate spatial locations. To address the noisy proposal problem, we propose a novel meta-learning based FSOD model by jointly optimizing few-shot proposal generation and fine-grained few-shot proposal classification. To improve proposal generation for few-shot classes, we propose to learn a lightweight metric-learning based prototype matching network, instead of the conventional simple linear object/non-object classifier, e.g., as used in the RPN. Our non-linear classifier with a feature fusion network can improve discriminative prototype matching and proposal recall for few-shot classes. To improve fine-grained few-shot proposal classification, we propose a novel attentive feature alignment method to address the spatial misalignment between the noisy proposals and few-shot classes, thus improving the performance of few-shot object detection. Meanwhile, we learn a separate R-CNN detection head for many-shot base classes and demonstrate strong performance in maintaining base class knowledge. Our model achieves state-of-the-art performance on multiple FSOD benchmarks over most shots and metrics.
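The core of a metric-learning based proposal scorer, replacing the linear object/non-object classifier, is a similarity between proposal features and class prototypes. A minimal cosine-similarity sketch (names and the choice of cosine similarity are assumptions, not the paper's exact matching network):

```python
import numpy as np

def prototype_match(proposal_feats, prototypes):
    """Score each proposal against each few-shot class prototype by
    cosine similarity, standing in for a linear objectness classifier."""
    p = proposal_feats / np.linalg.norm(proposal_feats, axis=1, keepdims=True)
    c = prototypes / np.linalg.norm(prototypes, axis=1, keepdims=True)
    return p @ c.T   # (num_proposals, num_classes)
```

Because the score depends on the support prototypes rather than fixed classifier weights, the same network can rank proposals for classes never seen during base training.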
We study the problem of few-shot open-set recognition (FSOR), which learns a recognition system capable of both fast adaptation to new classes with limited labeled examples and rejection of unknown negative samples. Traditional large-scale open-set methods have been shown to be ineffective for FSOR problems due to the data limitation. Current FSOR methods typically calibrate few-shot closed-set classifiers to be sensitive to negative samples so that they can be rejected via thresholding. However, threshold tuning is a challenging process, as different FSOR tasks may require different rejection powers. In this paper, we propose task-adaptive negative class envision for FSOR to integrate threshold tuning into the learning process. Specifically, we augment the few-shot closed-set classifier with additional negative prototypes generated from the few-shot examples. By incorporating few-shot class correlations into the negative generation process, we are able to learn dynamic rejection boundaries for FSOR tasks. Furthermore, we extend our method to generalized few-shot open-set recognition (GFSOR), which requires classification on both many-shot and few-shot classes as well as rejection of negative samples. Extensive experiments on public benchmarks validate our methods on both problems.
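At inference time, a classifier augmented with negative prototypes can reject a query whenever a negative prototype is the nearest match, replacing a hand-tuned threshold. A minimal nearest-prototype sketch under that assumption (not the paper's actual generation network):

```python
import numpy as np

def classify_or_reject(query, class_protos, negative_protos):
    """Nearest-prototype decision over closed-set prototypes augmented
    with generated negative prototypes; return -1 (reject) when a
    negative prototype is closest to the query."""
    protos = np.vstack([class_protos, negative_protos])
    dists = np.linalg.norm(protos - query, axis=1)
    idx = int(dists.argmin())
    return idx if idx < len(class_protos) else -1
```

The learning problem then shifts from picking a scalar threshold to generating negative prototypes whose positions carve out task-appropriate rejection regions.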
This work studies multi-task functional linear regression models where both the covariates and the unknown regression coefficients (called slope functions) are curves. For slope function estimation, we employ penalized splines to balance bias, variance, and computational complexity. The power of multi-task learning is brought in by imposing additional structures over the slope functions. We propose a general model with double regularization over the spline coefficient matrix: i) a matrix manifold constraint, and ii) a composite penalty as a summation of quadratic terms. Many multi-task learning approaches can be treated as special cases of this proposed model, such as a reduced-rank model and a graph Laplacian regularized model. We show that the composite penalty induces a specific norm, which helps to quantify the manifold curvature and determine the corresponding proper subset of the manifold tangent space. The complexity of the tangent-space subset is then bridged to the complexity of a geodesic neighborhood via generic chaining. A unified convergence upper bound is obtained and specifically applied to the reduced-rank model and the graph Laplacian regularized model. The phase transition behaviors of the estimators are examined as we vary the configurations of model parameters.
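In assumed notation (with $\Phi_t$ the spline design matrix for task $t$, $B = [b_1, \dots, b_T]$ the spline coefficient matrix constrained to a manifold $\mathcal{M}$, and positive semi-definite matrices $Q_k$ defining the composite quadratic penalty), the doubly regularized estimator described here might take the generic form:

```latex
\hat{B} \;=\; \mathop{\arg\min}_{B \in \mathcal{M}}\;
\sum_{t=1}^{T} \bigl\lVert y_t - \Phi_t b_t \bigr\rVert_2^2
\;+\; \lambda \sum_{k} \operatorname{tr}\!\bigl( B^\top Q_k B \bigr)
```

Special cases then follow from the choices of $\mathcal{M}$ and $Q_k$: a rank constraint on $B$ gives a reduced-rank model, while taking a single $Q_k$ equal to a graph Laplacian gives the graph-regularized model.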
Tensor robust principal component analysis (TRPCA), which aims to recover a low-rank tensor corrupted by sparse noise, has attracted much attention in many real applications. This paper develops a new Global Weighted TRPCA method (GWTRPCA), which is the first approach to simultaneously consider the significance of intra-frontal-slice and inter-frontal-slice singular values. Exploiting this global information, GWTRPCA assigns smaller weights to the larger singular values so that they are penalized less. Hence, our method can recover the low-tubal-rank components more accurately. Moreover, since the weight setting plays a crucial role in the success of GWTRPCA, we propose an effective adaptive weight learning strategy via a Modified Cauchy Estimator (MCE). To implement the GWTRPCA method, we devise an optimization algorithm using the Alternating Direction Method of Multipliers (ADMM) framework. Experiments on real-world datasets validate the effectiveness of our proposed method.
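The key sub-step inside a weighted TRPCA/ADMM iteration is weighted singular value thresholding: each singular value is shrunk by an amount proportional to its weight, so large singular values with small weights survive almost intact. A matrix-level sketch (the tensor version applies this per frontal slice in the transform domain; the signature here is an illustrative assumption):

```python
import numpy as np

def weighted_svt(mat, weights, tau):
    """Weighted singular value thresholding: shrink each singular
    value by tau * weight. Smaller weights on larger singular values
    mean those components are shrunk less and recovered more exactly."""
    u, s, vt = np.linalg.svd(mat, full_matrices=False)
    s_shrunk = np.maximum(s - tau * weights, 0.0)
    return u @ np.diag(s_shrunk) @ vt
```

With uniform weights this reduces to ordinary singular value thresholding (the proximal operator of the nuclear norm), which is what the global weighting is designed to improve upon.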
This paper focuses on designing efficient models with low parameter counts and FLOPs for dense predictions. Even though CNN-based lightweight methods have achieved stunning results after years of research, trading off model accuracy against constrained resources still needs further improvement. This work rethinks the essential unity of the efficient Inverted Residual Block in MobileNetv2 and the effective Transformer in ViT, inductively abstracting a general concept of the Meta-Mobile Block, and we argue that the specific instantiation is very important to model performance even though the instantiations share the same framework. Motivated by this phenomenon, we deduce a simple yet efficient modern \textbf{I}nverted \textbf{R}esidual \textbf{M}obile \textbf{B}lock (iRMB) for mobile applications, which absorbs CNN-like efficiency to model short-distance dependency and Transformer-like dynamic modeling capability to learn long-distance interactions. Furthermore, we design a ResNet-like 4-phase \textbf{E}fficient \textbf{MO}del (EMO) based only on a series of iRMBs for dense applications. Extensive experiments on the ImageNet-1K, COCO2017, and ADE20K benchmarks demonstrate the superiority of our EMO over state-of-the-art methods, \eg, our EMO-1M/2M/5M achieve 71.5, 75.1, and 78.4 Top-1 accuracy, surpassing \textbf{SoTA} CNN-/Transformer-based models, while trading off model accuracy and efficiency well.
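The shared skeleton abstracted by the Meta-Mobile Block, i.e., expand, mix, project, add the residual, can be sketched as follows; the function and shapes are illustrative assumptions, not the paper's iRMB definition:

```python
import numpy as np

def meta_mobile_block(x, w_expand, token_mixer, w_project):
    """Common skeleton of MobileNetv2's inverted residual and a ViT
    block: expand features, apply an instantiation-specific mixer
    (depth-wise conv in the CNN case, self-attention in the Transformer
    case), project back, and add the residual."""
    h = np.maximum(x @ w_expand, 0.0)   # channel expansion + nonlinearity
    h = token_mixer(h)                  # the part that differs per instantiation
    return x + h @ w_project            # projection + residual add
```

The paper's argument is that, within this fixed skeleton, the choice of `token_mixer` (efficiency-oriented vs. long-range-capable) dominates the accuracy/efficiency trade-off, and iRMB combines both behaviors in one block.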
Supervised Question Answering systems (QA systems) rely on domain-specific human-labeled data for training. Unsupervised QA systems generate their own question-answer training pairs, typically using secondary knowledge sources to achieve this outcome. Our approach (called PIE-QG) uses Open Information Extraction (OpenIE) to generate synthetic training questions from paraphrased passages and uses the question-answer pairs as training data for a language model for a state-of-the-art QA system based on BERT. Triples in the form of <subject, predicate, object> are extracted from each passage, and questions are formed with subjects (or objects) and predicates while objects (or subjects) are considered as answers. Experimenting on five extractive QA datasets demonstrates that our technique achieves on-par performance with existing state-of-the-art QA systems with the benefit of being trained on an order of magnitude fewer documents and without any recourse to external reference data sources.
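The triple-to-question step can be sketched as simple template filling: one question treats the subject as the answer, the other treats the object as the answer. The templates below are illustrative assumptions, not PIE-QG's exact phrasing:

```python
def triples_to_qa(triples):
    """Turn <subject, predicate, object> triples into synthetic QA
    pairs: the question is formed from one slot plus the predicate,
    and the remaining slot becomes the answer."""
    pairs = []
    for subj, pred, obj in triples:
        pairs.append((f"Who or what {pred} {obj}?", subj))       # subject as answer
        pairs.append((f"Who or what did {subj} {pred}?", obj))   # object as answer
    return pairs
```

These synthetic pairs then serve as extractive-QA training data, with the answer span guaranteed to occur in the source passage the triple was extracted from.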
Transformer has achieved impressive successes for various computer vision tasks. However, most existing studies require pretraining the Transformer backbone on a large-scale labeled dataset (e.g., ImageNet) to achieve satisfactory performance, which is usually unavailable for medical images. Additionally, due to the gap between medical and natural images, the improvement generated by ImageNet pretrained weights degrades significantly when the weights are transferred to medical image processing tasks. In this paper, we propose Bootstrap Own Latent of Transformer (BOLT), a self-supervised learning approach specifically for medical image classification with a Transformer backbone. Our BOLT consists of two networks, namely online and target branches, for self-supervised representation learning. Concretely, the online network is trained to predict the target network representation of the same patch embedding tokens under a different perturbation. To maximally exploit the Transformer on limited medical data, we propose an auxiliary difficulty ranking task: the Transformer is enforced to identify which branch (i.e., online/target) is processing the more difficult perturbed tokens. Overall, the Transformer endeavours to distill transformation-invariant features from the perturbed tokens so as to simultaneously achieve difficulty measurement and maintain the consistency of self-supervised representations. The proposed BOLT is evaluated on three medical image processing tasks, i.e., skin lesion classification, knee fatigue fracture grading, and diabetic retinopathy grading. The experimental results validate the superiority of our BOLT for medical image classification, compared to ImageNet pretrained weights and state-of-the-art self-supervised learning approaches.
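The online-predicts-target objective is in the spirit of BYOL-style bootstrapping: the loss is the (negative) cosine similarity between the online branch's prediction and the target branch's representation of a differently perturbed view. A minimal sketch of that loss, as an assumption about the objective's form rather than BOLT's exact formulation:

```python
import numpy as np

def byol_style_loss(online_pred, target_proj):
    """2 - 2 * cosine similarity per sample, averaged over the batch.
    online_pred and target_proj are representations of two differently
    perturbed views of the same tokens."""
    p = online_pred / np.linalg.norm(online_pred, axis=1, keepdims=True)
    z = target_proj / np.linalg.norm(target_proj, axis=1, keepdims=True)
    return float((2.0 - 2.0 * (p * z).sum(axis=1)).mean())
```

The loss is zero exactly when the two branches agree up to scale, which is the consistency the auxiliary difficulty ranking task is layered on top of.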